Stability Analysis for Regularized Least Squares Regression

Author

  • Cynthia Rudin
Abstract

We discuss stability for a class of learning algorithms with respect to noisy labels. The algorithms we consider are for regression, and they involve the minimization of regularized risk functionals, such as $L(f) := \frac{1}{N}\sum_{i=1}^{N}\left(f(x_i) - y_i\right)^2 + \lambda\|f\|_{\mathcal{H}}^2$. We shall call the algorithm 'stable' if, when $y_i$ is a noisy version of $f^*(x_i)$ for some function $f^* \in \mathcal{H}$, the output of the algorithm converges to $f^*$ as the regularization term and noise simultaneously vanish. We consider two flavors of this problem, one where a data set of $N$ points remains fixed, and the other where $N \to \infty$. For the case where $N \to \infty$, we give conditions for convergence to $f_E$ (the function which is the expectation of $y(x)$ for each $x$) as $\lambda \to 0$. For the fixed $N$ case, we describe the limiting 'non-noisy', 'non-regularized' function $f^*$, and give conditions for convergence. In the process, we develop a set of tools for dealing with functionals such as $L(f)$, which are applicable to many other problems in learning theory.

Keywords: statistical learning theory, learning in the limit, regularized least squares regression, RKHS
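To make the setup concrete, the following minimal sketch (an illustration of my own, not from the paper) fits kernel ridge regression, which minimizes exactly a functional of the form $L(f)$ above, and checks numerically that the fit approaches a target $f^*$ as the noise level and the regularization parameter $\lambda$ shrink together. The Gaussian kernel, the choice $f^*(x) = \sin(2\pi x)$, and all parameter values are illustrative assumptions.

```python
# Minimal numerical illustration of the stability notion in the abstract:
# fit kernel ridge regression (which minimizes L(f) as defined above) and
# watch the fit approach the noise-free f* as noise and lambda vanish together.
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, gamma=10.0):
    """Gaussian (RBF) kernel matrix between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam):
    """Representer-theorem solution: alpha = (K + N*lam*I)^{-1} y."""
    N = len(X)
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + N * lam * np.eye(N), y)

f_star = lambda x: np.sin(2 * np.pi * x).ravel()  # illustrative choice of f*
X = rng.uniform(0, 1, size=(50, 1))

for sigma, lam in [(0.5, 1e-1), (0.1, 1e-2), (0.01, 1e-4)]:
    y = f_star(X) + sigma * rng.normal(size=len(X))  # y_i: noisy version of f*(x_i)
    alpha = kernel_ridge_fit(X, y, lam)
    f_hat = rbf_kernel(X, X) @ alpha                 # fitted values at the x_i
    err = np.max(np.abs(f_hat - f_star(X)))
    print(f"noise={sigma:5.2f}  lambda={lam:7.0e}  sup-error on sample: {err:.4f}")
```

The printed sup-error shrinks down the rows, matching the abstract's informal notion of stability for the fixed-$N$ flavor of the problem.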


Similar articles

Regularized fuzzy clusterwise ridge regression

Fuzzy clusterwise regression has been a useful method for investigating cluster-level heterogeneity of observations based on linear regression. This method integrates fuzzy clustering and ordinary least-squares regression, thereby enabling simultaneous estimation of regression coefficients for each cluster and of the fuzzy cluster memberships of observations. In practice, however, fuzzy clusterwise re...
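As a concrete picture of the alternating scheme this abstract describes, here is a minimal sketch (a simplified formulation of my own, not the paper's estimator): each iteration performs a weighted ridge solve per cluster, then updates fuzzy memberships from each cluster's regression residuals via the standard fuzzy-c-means rule. The fuzzifier m, ridge weight, and toy data are assumptions.

```python
# Sketch of fuzzy clusterwise ridge regression: alternate (a) per-cluster
# weighted ridge solves and (b) FCM-style membership updates from residuals.
import numpy as np

def fuzzy_clusterwise_ridge(X, y, K=2, m=2.0, lam=1e-2, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    N, p = X.shape
    U = rng.dirichlet(np.ones(K), size=N)        # fuzzy memberships, rows sum to 1
    W = np.zeros((K, p))
    for _ in range(n_iter):
        # (a) per-cluster ridge regression weighted by u_ik^m
        for k in range(K):
            wts = U[:, k] ** m
            A = X.T @ (wts[:, None] * X) + lam * np.eye(p)
            W[k] = np.linalg.solve(A, X.T @ (wts * y))
        # (b) membership update from squared residuals (fuzzy-c-means rule)
        d = (y[:, None] - X @ W.T) ** 2 + 1e-12  # N x K residual 'distances'
        U = d ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return W, U

# Toy usage: two latent lines; recovered slopes should be near +2 and -3.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1))
z = rng.random(200) < 0.5
y = np.where(z, 2.0, -3.0) * X[:, 0] + 0.1 * rng.normal(size=200)
W, U = fuzzy_clusterwise_ridge(X, y, K=2)
print("recovered cluster slopes:", W.ravel())
```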


An Efficient Method for Large-Scale l1-Regularized Convex Loss Minimization

Convex loss minimization with l1 regularization has been proposed as a promising method for feature selection in classification (e.g., l1-regularized logistic regression) and regression (e.g., l1-regularized least squares). In this paper we describe an efficient interior-point method for solving large-scale l1-regularized convex loss minimization problems that uses a preconditioned conjugate gr...
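The interior-point solver described above is specialized machinery; as a simpler stand-in for the same objective, $\tfrac{1}{2}\|Ax - b\|_2^2 + \lambda\|x\|_1$, the sketch below uses plain proximal-gradient descent (ISTA) with soft-thresholding. This is not the paper's method, just a compact way to see what l1-regularized least squares computes.

```python
# ISTA for the l1-regularized least squares objective
# (1/2)||Ax - b||^2 + lam * ||x||_1.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy usage: sparse ground truth; the true support should be recovered.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 50))
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.5, -2.0, 0.7]
b = A @ x_true + 0.01 * rng.normal(size=100)
x_hat = ista(A, b, lam=1.0)
print("nonzeros recovered:", np.flatnonzero(x_hat))
```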


Regularized Discriminant Analysis, Ridge Regression and Beyond

Fisher linear discriminant analysis (FDA) and its kernel extension—kernel discriminant analysis (KDA)—are well-known methods that consider dimensionality reduction and classification jointly. While widely deployed in practical problems, there are still unresolved issues surrounding their efficient implementation and their relationship with least mean squares procedures. In this paper we address...
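One piece of the FDA/least-squares relationship alluded to here is a classical textbook fact that is easy to verify numerically: for two classes with targets coded $N/N_1$ and $-N/N_2$, the least-squares weight vector is parallel (up to sign) to the Fisher direction $S_W^{-1}(\mu_1 - \mu_2)$, and adding a ridge penalty gives a regularized (RDA-style) variant. The check below is my own illustration, not this paper's treatment.

```python
# Verify that least squares on class-coded targets recovers the FDA direction.
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal([0, 0], 1.0, size=(60, 2))
X2 = rng.normal([3, 1], 1.0, size=(40, 2))
X = np.vstack([X1, X2])
N, N1, N2 = 100, 60, 40

# Fisher direction: within-class scatter inverse times the mean difference.
S_W = ((X1 - X1.mean(0)).T @ (X1 - X1.mean(0))
       + (X2 - X2.mean(0)).T @ (X2 - X2.mean(0)))
w_fda = np.linalg.solve(S_W, X1.mean(0) - X2.mean(0))

# Least squares on coded targets (with an intercept column).
y = np.concatenate([np.full(N1, N / N1), np.full(N2, -N / N2)])
Xa = np.hstack([X, np.ones((N, 1))])
w_ls = np.linalg.lstsq(Xa, y, rcond=None)[0][:2]

cos = abs(w_fda @ w_ls) / (np.linalg.norm(w_fda) * np.linalg.norm(w_ls))
print(f"|cosine(FDA, least squares)| = {cos:.6f}")  # expect ~1.0 (parallel)
```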


Extreme Support Vector Regression

Extreme Support Vector Machine (ESVM), a variant of ELM, is a nonlinear SVM algorithm based on regularized least squares optimization. In this chapter, a regression algorithm, Extreme Support Vector Regression (ESVR), is proposed based on ESVM. Experiments show that ESVR has better generalization ability than the traditional ELM. Furthermore, ESVM can reach accuracy comparable to SVR and LS-SV...
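The regularized-least-squares core that ELM-style methods share is easy to sketch (this is an illustrative reconstruction, not the ESVR algorithm from the chapter): map inputs through a random, untrained hidden layer, then solve a ridge system for the output weights in closed form; the tanh activation, layer width, and regularization constant C are assumptions.

```python
# ELM-style regression: random hidden layer + closed-form ridge solve.
import numpy as np

def elm_regressor(X, y, n_hidden=200, C=100.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer features
    # Ridge solve for output weights: beta = (H^T H + I/C)^{-1} H^T y.
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ y)
    return lambda Xq: np.tanh(Xq @ W + b) @ beta

# Toy usage on a 1-D nonlinear target.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.normal(size=300)
predict = elm_regressor(X, y)
print("train RMSE:", np.sqrt(np.mean((predict(X) - y) ** 2)))
```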


Algorithmic Stability and Meta-Learning

A mechanism of transfer learning is analysed, where samples drawn from different learning tasks of an environment are used to improve the learner's performance on a new task. We give a general method to prove generalisation error bounds for such meta-algorithms. The method can be applied to the bias learning model of J. Baxter and to derive novel generalisation bounds for meta-algorithms searching...



Journal:
  • CoRR

Volume: abs/cs/0502016

Pages: -

Publication date: 2005